What is Artificial Intelligence
Artificial intelligence is an interdisciplinary science that grew out of computer science; it studies machines that can perform tasks which would otherwise require human intelligence. From a classification perspective, scientists generally divide artificial intelligence into two categories:
Strong AI (or Artificial General Intelligence): Refers to intelligent machines that can solve problems they were never trained on. This type of artificial intelligence does not yet exist, as it must handle general, open-ended scenarios and is currently out of reach.
Weak AI (or Narrow AI): Refers to intelligent machines that operate in a limited environment and solve a limited range of problems. Currently, all artificial intelligence falls into this category.
Strong AI is the direction humans are pursuing, while weak AI describes all the artificial intelligence that exists today. Although weak AI can already complete many tasks efficiently, it is still a long way from human intelligence.
The History of Artificial Intelligence
The artificial intelligence we refer to today actually traces back to the 1940s. In 1942, the author Isaac Asimov published the Three Laws of Robotics, which state that robots must not harm humans. In 1943, Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity, proposing the first mathematical model of a neural network.
In the 1950s, artificial intelligence developed rapidly. In 1950, Alan Turing published Computing Machinery and Intelligence and proposed the Turing Test as a way to judge whether a machine exhibits intelligent behavior. Around the same time, SNARC, the world's first neural network learning machine, was built.
In 1956, the term artificial intelligence first appeared at the Dartmouth Summer Research Project on Artificial Intelligence, which is widely regarded as the birth of the field. In 1959, Arthur Samuel coined the term machine learning, and the development of artificial intelligence entered a new stage.
However, in the 1970s and 1980s, the field went through two "AI winters": progress slowed, and investment in the market failed to produce matching returns. With later technological advances and the spread of the internet in the 1990s, the cost of developing artificial intelligence began to fall, and the field entered a new period of growth.
In 1997, IBM's Deep Blue defeated the reigning world chess champion, Garry Kasparov, marking the first time artificial intelligence had beaten a human champion at an intellectual game. In 2016, AlphaGo from the Google DeepMind team defeated a world champion Go player, demonstrating the power of artificial intelligence in an even more complex game.
At the end of 2022, OpenAI launched ChatGPT, which once again sparked market enthusiasm for artificial intelligence research. Google subsequently launched Bard, Microsoft launched the new Bing, and the release of GPT-4 drew worldwide attention to artificial intelligence.
Artificial Intelligence, Machine Learning, and Deep Learning
In research and media coverage of artificial intelligence, the terms machine learning and deep learning come up frequently.
Machine learning is a research direction within artificial intelligence: it uses algorithms that let computers learn how to handle a task automatically, rather than having the behavior programmed directly. Machine learning can be further divided into two categories, supervised learning and unsupervised learning. The difference is that supervised learning trains on datasets whose examples are labeled with the desired outputs, while unsupervised learning works on unlabeled data and must find structure on its own.
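To make the distinction concrete, here is a minimal sketch of the two settings, assuming scikit-learn is available; the toy dataset and the specific model choices (logistic regression, k-means) are illustrative assumptions, not part of the text above.

```python
# Minimal sketch: supervised vs. unsupervised learning with scikit-learn.
# The dataset and model choices here are illustrative assumptions.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy dataset: 2-D points grouped into three clusters, with known labels y.
X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Supervised learning: the model sees both the inputs X and the labels y.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised predictions:", clf.predict(X[:5]))

# Unsupervised learning: the model sees only X and must find structure itself.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("Unsupervised cluster assignments:", km.labels_[:5])
```

The only difference between the two calls is whether the labels y are passed in, which is exactly the boundary between supervised and unsupervised learning.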
Deep learning is a type of machine learning that learns with neural network architectures. These architectures draw inspiration from biological neural networks: data is processed through one or more hidden layers, with weighted connections between layers combining to produce the final result.
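As a rough illustration of "hidden layers plus weighted connections", the sketch below runs a single forward pass through a tiny network in plain NumPy; the layer sizes, random weights, and choice of ReLU are assumptions made for the example, not any particular framework's API.

```python
# Minimal sketch of a forward pass through a network with one hidden layer.
# Layer sizes, random weights, and the ReLU nonlinearity are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

# An input of 4 features, a hidden layer of 8 units, and 3 output scores.
x = rng.normal(size=(4,))
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # weighted connections: input -> hidden
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # weighted connections: hidden -> output

hidden = relu(W1 @ x + b1)   # hidden layer: weighted sum followed by a nonlinearity
scores = W2 @ hidden + b2    # output layer: another weighted combination
print(scores)
```

Training a real deep network means adjusting the weight matrices W1 and W2 from data, but the layered, weighted structure is the same.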
The Future of Artificial Intelligence
Moore's Law observes that the number of transistors on a chip doubles roughly every two years, while the cost per transistor falls. Decades of this trend have provided the basic conditions for the development of artificial intelligence, and of deep learning in particular.
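As a back-of-the-envelope illustration of that doubling, the snippet below computes how a transistor count grows under a strict two-year doubling rule; the starting count and time horizons are made-up numbers used only for the arithmetic.

```python
# Back-of-the-envelope Moore's Law: a count that doubles every 2 years
# grows to n0 * 2 ** (t / 2) after t years.
# The starting count and horizons below are illustrative assumptions.
n0 = 1_000_000  # hypothetical transistor count today
for years in (2, 10, 20):
    print(f"after {years} years: {int(n0 * 2 ** (years / 2)):,} transistors")
```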
Some research suggests that the pace of innovation in artificial intelligence has outstripped even Moore's Law. With costs continuing to fall, artificial intelligence has ample room to grow in the years ahead.